10 research outputs found

    Learning over time using a neuromorphic adaptive control algorithm for robotic arms

    Full text link
    In this paper, we explore the ability of a robot arm to learn the underlying operational space defined by the positions (x, y, z) that the arm's end-effector can reach, including under disturbances, by deploying and thoroughly evaluating a Spiking Neural Network (SNN)-based adaptive control algorithm. While traditional robotic control algorithms have limited ability to adapt to new and dynamic environments, we show that the robot arm can learn its operational space and complete tasks faster over time. We also demonstrate that the SNN-based adaptive control algorithm enables a fast response while maintaining energy efficiency. We obtained these results by performing an extensive search of the adaptive algorithm's parameter space and by evaluating performance for different SNN sizes, learning rates, dynamic robot arm trajectories, and response times. We show that the robot arm learns to complete tasks 15% faster in specific experimental scenarios, such as those with six or nine random target points.
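As an illustration of the kind of online-learning adaptive term such a controller adds to a robot arm's command signal, the sketch below uses a PES-style decoder update in plain NumPy. The network size, learning rate, and rectified-linear activities are illustrative assumptions, not the paper's actual spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100 neurons, 3-dimensional task-space error (x, y, z).
n_neurons, dims = 100, 3
encoders = rng.standard_normal((n_neurons, dims))
decoders = np.zeros((n_neurons, dims))   # adaptive weights, learned online
learning_rate = 1e-4

def neuron_activity(state):
    """Rectified-linear stand-in for spiking rates driven by the arm state."""
    return np.maximum(0.0, encoders @ state)

def adaptive_term(state, error, dt=1e-3):
    """One control step: emit the adaptive contribution, then update the
    decoders so future output moves opposite to the task-space error."""
    global decoders
    a = neuron_activity(state)
    u_adapt = decoders.T @ a                              # adaptive output
    decoders -= learning_rate * dt * np.outer(a, error)   # online learning
    return u_adapt
```

In a full controller this term would be summed with a conventional feedback signal, letting the network gradually cancel unmodeled disturbances such as payload changes.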

    Neuromorphic Visual Odometry with Resonator Networks

    Full text link
    Autonomous agents require self-localization to navigate in unknown environments. They can use Visual Odometry (VO) to estimate self-motion and localize themselves using visual sensors. Unlike inertial sensors, this motion-estimation strategy is not compromised by drift, and unlike wheel encoders, it is not compromised by slippage. However, VO with conventional cameras is computationally demanding, limiting its application in systems with strict low-latency, low-memory, and low-energy requirements. Using event-based cameras and neuromorphic computing hardware offers a promising low-power solution to the VO problem. However, conventional VO algorithms are not readily convertible to neuromorphic hardware. In this work, we present a VO algorithm built entirely of neuronal building blocks suitable for neuromorphic implementation. The building blocks are groups of neurons representing vectors in the computational framework of Vector Symbolic Architecture (VSA), which was proposed as an abstraction layer for programming neuromorphic hardware. The VO network we propose generates and stores a working memory of the presented visual environment. It updates this working memory while simultaneously estimating the changing location and orientation of the camera. We demonstrate how VSA can be leveraged as a computing paradigm for neuromorphic robotics. Moreover, our results represent an important step towards using neuromorphic computing hardware for fast and power-efficient VO and the related task of simultaneous localization and mapping (SLAM). We validate this approach experimentally in a simple robotic task and with an event-based dataset, demonstrating state-of-the-art performance in these settings.
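The "groups of neurons representing vectors" can be sketched with complex phasor vectors, a common VSA flavor in which binding is an elementwise product and unbinding uses the complex conjugate. The dimensionality and helper names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1024  # hypothetical vector dimensionality

def random_phasor(d=D):
    """Random unit-magnitude complex vector (one neuron phase per entry)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, d))

def bind(a, b):
    return a * b            # elementwise complex product (Hadamard binding)

def unbind(c, a):
    return c * np.conj(a)   # binding with the conjugate inverts binding

def sim(a, b):
    return np.real(np.vdot(a, b)) / len(a)  # ~1 if same, ~0 if unrelated

feature, location = random_phasor(), random_phasor()
memory = bind(feature, location)    # "feature at location" as one vector
recovered = unbind(memory, location)
```

Sums of such bound pairs can represent a whole visual map in working memory, and queries against it reduce to unbinding followed by a similarity check.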

    Neuromorphic Visual Scene Understanding with Resonator Networks

    Full text link
    Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic solution that utilizes an efficient factorization network based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) the design of Hierarchical Resonator Networks (HRN) to deal with the non-commutative nature of translation and rotation in visual scenes, when both are used in combination; (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued vector binding on neuromorphic hardware. The VSA framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. The HRN enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. In this work, we demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates this approach in real-world application scenarios for machine vision and robotics.
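A minimal sketch of the resonator-network factorization idea, assuming complex phasor codebooks and just two factors (say, object identity and pose): each factor estimate repeatedly unbinds the other estimate from the scene vector and is cleaned up against its codebook. The dimensions, codebook sizes, and flat (non-hierarchical, non-spiking) update below are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 1024, 4   # illustrative: vector dimension, codewords per factor

def phasors(n):
    """Random unit-magnitude complex vectors (one phase per component)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, (n, D)))

X, Y = phasors(K), phasors(K)      # codebooks for the two factors
s = X[1] * Y[2]                    # composite scene vector to factorize

def cleanup(v, codebook):
    """Project v onto the codebook span, then renormalize to unit phasors."""
    w = np.conj(codebook) @ v      # similarity to every codeword
    u = w @ codebook               # similarity-weighted superposition
    return u / np.abs(u)

x_hat = phasors(1)[0]              # random initial estimates
y_hat = phasors(1)[0]
for _ in range(20):                # alternate until the estimates settle
    x_hat = cleanup(s * np.conj(y_hat), X)
    y_hat = cleanup(s * np.conj(x_hat), Y)
```

After convergence, the nearest codeword to each estimate identifies the factor, here recovering indices 1 and 2.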

    Bioinspired smooth neuromorphic control for robotic arms

    No full text
    Beyond providing accurate movements, achieving smooth motion trajectories is a long-standing goal of robotics control theory for arms aiming to replicate natural human movements. Drawing inspiration from biological agents, whose reaching control networks effortlessly give rise to smooth and precise movements, can simplify these control objectives for robot arms. Neuromorphic processors, which mimic the brain’s computational principles, are an ideal platform to approximate the accuracy and smoothness of biological controllers while maximizing their energy efficiency and robustness. However, the incompatibility of conventional control methods with neuromorphic hardware limits the computational efficiency and explainability of their existing adaptations. In contrast, the neuronal subnetworks underlying smooth and accurate reaching movements are effective, minimal, and inherently compatible with neuromorphic hardware. In this work, we emulate these networks with a biologically realistic spiking neural network for motor control on neuromorphic hardware. The proposed controller incorporates experimentally identified short-term synaptic plasticity and specialized neurons that regulate sensory feedback gain to provide smooth and accurate joint control across a wide motion range. Concurrently, it preserves the minimal complexity of its biological counterpart and is directly deployable on Intel’s neuromorphic processor. Using the joint controller as a building block and inspired by joint coordination in human arms, we scaled up this approach to control real-world robot arms. The trajectories and smooth, bell-shaped velocity profiles of the resulting motions resembled those of humans, verifying the biological relevance of the controller. Notably, the method achieved state-of-the-art control performance while decreasing motion jerk by 19% to improve motion smoothness. Overall, this work suggests that control solutions inspired by experimentally identified neuronal architectures can provide effective neuromorphic control for robots.
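The smooth, bell-shaped velocity profiles mentioned above are the signature of minimum-jerk reaching. The sketch below implements the textbook minimum-jerk trajectory (a standard reference model for human-like reaches, not the paper's spiking controller): position follows a quintic polynomial whose velocity is zero at both endpoints and peaks at mid-reach.

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position and velocity at time t for a reach of
    duration T from x0 to xf; yields a bell-shaped speed profile."""
    tau = np.clip(t / T, 0.0, 1.0)       # normalized time in [0, 1]
    d = xf - x0
    pos = x0 + d * (10*tau**3 - 15*tau**4 + 6*tau**5)
    vel = d / T * (30*tau**2 - 60*tau**3 + 30*tau**4)
    return pos, vel
```

The peak speed is 1.875·d/T at tau = 0.5, and jerk (the third derivative of position) is what this profile minimizes over the whole reach.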

    Neuromorphic Visual Scene Understanding with Resonator Networks (in brief)

    No full text
    Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic framework that poses scene understanding as a factorization problem and uses a resonator network to extract object identities and their transformations. The framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. We also describe a hierarchical resonator network that enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition, and for rotation and scaling within the other partition. We demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes.

    Video_1_Adaptive control of a wheelchair mounted robotic arm with neuromorphically integrated velocity readings and online-learning.MOV

    No full text
    Wheelchair-mounted robotic arms support people with upper extremity disabilities with various activities of daily living (ADL). However, the associated cost and the power consumption of responsive and adaptive assistive robotic arms contribute to the fact that such systems are in limited use. Neuromorphic spiking neural networks can be used for real-time machine-learning-driven control of robots, providing an energy-efficient framework for adaptive control. In this work, we demonstrate neuromorphic adaptive control of a wheelchair-mounted robotic arm deployed on Intel’s Loihi chip. Our algorithm design uses neuromorphically represented and integrated velocity readings to derive the arm’s current state. The proposed controller provides the robotic arm with adaptive signals, guiding its motion while accounting for kinematic changes in real time. We pilot-tested the device with an able-bodied participant to evaluate its accuracy while performing ADL-related trajectories. We further demonstrated the capacity of the controller to compensate for unexpected inertia-generating payloads using online learning. Videotaped recordings of ADL tasks performed by the robot were viewed by caregivers; data summarizing their feedback on the user experience and the potential benefit of the system is reported.
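Deriving the arm's state from integrated velocity readings can be illustrated with a simple discrete-time integrator, optionally leaky in the spirit of neural integrator circuits. This is a hand-written sketch under those assumptions, not the Loihi implementation.

```python
import numpy as np

def integrate_velocity(vel_samples, dt, x0=0.0, leak=0.0):
    """Estimate a joint position by accumulating velocity readings.
    `leak` (1/s) decays the state each step, mimicking a leaky neural
    integrator; leak=0 reduces to plain Euler integration."""
    x = x0
    trace = []
    for v in vel_samples:
        x = x * (1.0 - leak * dt) + v * dt
        trace.append(x)
    return np.array(trace)
```

With leak > 0 and a constant velocity v, the estimate settles at v/leak, which bounds the effect of sensor bias at the cost of underestimating sustained motion; leak = 0 tracks position exactly but accumulates bias without limit.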